Deterministic Discrepancy Minimization via the Multiplicative Weight Update Method
Authors
Abstract
A well-known theorem of Spencer shows that any set system with n sets over n elements admits a coloring of discrepancy O(√n). While the original proof was non-constructive, recent progress brought polynomial-time algorithms by Bansal, by Lovett and Meka, and by Rothvoss. All of those algorithms are randomized, even though Bansal's algorithm admits a complicated derandomization. We propose an elegant deterministic polynomial-time algorithm that is inspired by Lovett-Meka as well as the Multiplicative Weight Update method. The algorithm iteratively updates a fractional coloring while controlling the exponential weights that are assigned to the set constraints. A conjecture by Meka suggests that Spencer's bound can be generalized to symmetric matrices. We prove that n × n matrices that are block diagonal with block size q admit a coloring of discrepancy O(√n · √log q). Bansal, Dadush and Garg recently gave a randomized algorithm to find a vector x with entries in {−1, 1} with ‖Ax‖∞ ≤ O(√log n) in polynomial time, where A is any matrix whose columns have length at most 1. We show that our method can be used to obtain such a vector deterministically.
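To make the exponential-weights idea concrete, here is a minimal illustrative sketch, not the paper's algorithm (whose iterates are fractional and more involved): a deterministic greedy coloring that assigns each element the sign minimizing a hyperbolic-cosine potential over the set constraints, the classic potential behind O(√(n log m)) discrepancy bounds. The function name and parameter choices below are my own.

```python
import math

def greedy_potential_coloring(sets, n, eta=None):
    """Deterministically color elements 0..n-1 with signs in {-1, +1},
    greedily minimizing the potential sum_j cosh(eta * disc_j), where
    disc_j is the running signed discrepancy of set j.  Illustrative
    sketch of the exponential-weights idea, not the paper's algorithm."""
    m = len(sets)
    if eta is None:
        eta = math.sqrt(math.log(2 * m) / max(n, 1))
    membership = [[] for _ in range(n)]   # sets containing each element
    for j, S in enumerate(sets):
        for i in S:
            membership[i].append(j)
    disc = [0.0] * m
    x = [0] * n
    for i in range(n):
        def delta(s):                     # potential change if x[i] = s
            return sum(math.cosh(eta * (disc[j] + s)) - math.cosh(eta * disc[j])
                       for j in membership[i])
        x[i] = 1 if delta(+1) <= delta(-1) else -1
        for j in membership[i]:
            disc[j] += x[i]
    return x

# tiny usage example: 4 sets over 6 elements
sets = [{0, 1, 2}, {2, 3, 4}, {1, 3, 5}, {0, 4, 5}]
x = greedy_potential_coloring(sets, 6)
print(x, [abs(sum(x[i] for i in S)) for S in sets])
```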
Similar Resources
Parallelized Solution to Semidefinite Programs in Quantum Complexity Theory
In this paper we present an equilibrium-value-based framework for solving SDPs via the multiplicative weight update method, which differs from the one in Kale's thesis [Kal07]. One of the main advantages of the new framework is that it guarantees the convertibility from approximate to exact feasibility for a much more general class of SDPs than previous results. Another advantage is the de...
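For context, the generic matrix multiplicative weights update that underlies MWU-based SDP solvers (as in Kale's thesis) is sketched below; this is an assumed, standard formulation, not the equilibrium-value framework of the paper itself.

```python
import numpy as np
from scipy.linalg import expm

def matrix_mwu(loss_matrices, eta=0.1):
    # Density matrix rho_t proportional to exp(-eta * sum of past losses);
    # the generic matrix-MWU update, not the paper's framework.
    n = loss_matrices[0].shape[0]
    cumulative = np.zeros((n, n))
    for L in loss_matrices:
        M = expm(-eta * cumulative)
        rho = M / np.trace(M)            # current "strategy": a density matrix
        yield rho
        cumulative = cumulative + L      # observe this round's loss matrix

# toy run: alternating diagonal losses push weight between coordinates
losses = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])] * 3
rho = None
for rho in matrix_mwu(losses):
    pass
print(np.round(rho, 3))
```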
Pattern Recognition as a Deterministic Problem: An Approach Based on Discrepancy
When the position of each input vector in the training set is not fixed beforehand, a deterministic approach can be adopted to address the general problem of learning. In particular, the consistency of the Empirical Risk Minimization (ERM) principle can be established when the points in the input space are generated through a purely deterministic algorithm (deterministic learning). When the ...
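As an assumed illustration of a "purely deterministic algorithm" for generating input points, the van der Corput low-discrepancy sequence is the standard textbook example (the paper's actual generator may differ):

```python
def van_der_corput(n, base=2):
    """First n points of the van der Corput sequence in [0, 1):
    reverse the base-b digits of k across the radix point.  A classic
    deterministic low-discrepancy point generator (illustrative only)."""
    seq = []
    for k in range(1, n + 1):
        q, denom, x = k, 1.0, 0.0
        while q:
            q, r = divmod(q, base)
            denom *= base
            x += r / denom
        seq.append(x)
    return seq

print(van_der_corput(8))   # [0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875, 0.0625]
```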
Lecture 10: Applications of Multiplicative Weight Updates: LP Solving, Portfolio Management
Today we see how to use the multiplicative weight update method to solve other problems. In many settings there is a natural way to make local improvements that "make sense." The multiplicative weight updates analysis from last time (via a simple potential function) allows us to understand and analyse the net effect of such sensible improvements. (Formally, what we are doing in many settings is ...
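A minimal sketch of the experts-setting update and the potential it refers to: the total weight W_t shrinks whenever the algorithm incurs loss, which is what yields the standard regret bound. Names and parameters below are illustrative.

```python
def mwu_experts(loss_rounds, eta=0.1):
    """Multiplicative weights over n experts with per-round losses in [0, 1]:
    each round, expert i's weight is multiplied by (1 - eta)**loss_i.
    The potential W_t = sum of weights decreases whenever loss is incurred."""
    n = len(loss_rounds[0])
    w = [1.0] * n
    total = 0.0
    for losses in loss_rounds:
        W = sum(w)
        p = [wi / W for wi in w]                  # play the weighted mixture
        total += sum(pi * li for pi, li in zip(p, losses))
        w = [wi * (1 - eta) ** li for wi, li in zip(w, losses)]
    return total, w

# two experts: one always right (loss 0), one always wrong (loss 1)
print(mwu_experts([[0.0, 1.0]] * 50))
```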
Backpropagation Convergence via Deterministic Nonmonotone Perturbed Minimization
The fundamental backpropagation (BP) algorithm for training artificial neural networks is cast as a deterministic nonmonotone perturbed gradient method. Under certain natural assumptions, such as the series of learning rates diverging while the series of their squares converges, it is established that every accumulation point of the online BP iterates is a stationary point of the BP error func...
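The quoted assumption is the classic Robbins-Monro condition Σ η_t = ∞ and Σ η_t² < ∞. A minimal, hypothetical sketch of perturbed gradient descent under such a schedule (the objective and the perturbation are stand-ins, not from the paper):

```python
def eta(t, c=0.5):
    # harmonic rates: sum of eta_t diverges, sum of eta_t**2 converges
    return c / (t + 1)

w = 5.0                                # hypothetical scalar parameter
for t in range(2000):
    grad = 2.0 * (w - 3.0)             # gradient of (w - 3)^2, a stand-in error
    perturbation = 0.1 * (-1.0) ** t   # bounded deterministic perturbation
    w -= eta(t) * (grad + perturbation)
print(w)                               # settles near the stationary point w = 3
```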
Journal:
Volume/Issue:
Pages: -
Publication date: 2017